I am having an issue while changing a custom post type URL. The current post type URL is: http://example.com/product/product-slug where product is a custom post type. I want to change it to: http://domain.com/brand/brand-slug/product-slug where brand is a custom taxonomy. I found code that removes /product/ from the custom post type URL, and it's working fine for me. I am using the following code: function gp_remove_cpt_slug( $post_link, $post, $leavename ) { if ( 'product' != $post->post_type || 'publish' != $post->post_status ) { return …
# now we will read the images from the folder, segment them, and produce the output for image_name in listdir('images'): counter = 1 # constructing the name of the file file_name = 'images/' + image_name # getting segmented images letters_in_image = image_segmentation(file_name) # sorting the letters so that letters that appear earlier are processed first letters_in_image = sorted(letters_in_image, key=lambda x: x[0]) ans = "" for (x,y,w,h) in letters_in_image: image = cv2.imread(file_name,0) letter = image[y - 2:y + h + …
I have 2 sets of training data in csv files. The training data have class labels, 1 for memorable, and 0 for not memorable. In addition, there is also a confidence label for each sample. The class labels were assigned based on decisions from 3 people viewing the photos. When they all agreed, the class label could be considered certain, and a confidence of 1 was written down. If they didn't all agree, then the classification decided on by the …
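One option I'm considering is treating the confidence as a per-sample weight (many scikit-learn estimators accept this via a `sample_weight` argument to `fit`). A minimal numpy sketch with made-up labels and confidences:

```python
import numpy as np

# Hypothetical stand-ins for the CSV contents: class labels plus the
# per-sample confidence from the three annotators (values are made up).
y_true = np.array([1, 0, 1, 1, 0])
confidence = np.array([1.0, 1.0, 0.66, 1.0, 0.66])  # 1.0 = all three agreed

y_pred = np.array([1, 0, 0, 1, 0])  # some model's predictions

# Confidence-weighted error: mistakes on uncertain labels cost less.
# Many scikit-learn estimators accept these weights directly via
# clf.fit(X, y, sample_weight=confidence).
errors = (y_true != y_pred).astype(float)
weighted_error = float(np.sum(confidence * errors) / np.sum(confidence))
```

The same weights can also be passed to evaluation metrics, so the uncertain samples count less in both training and scoring.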
I am working on an algorithm whose results I am comparing with another model using a 90% confidence interval. Can this be called a statistical test? I read an article that talked about statistical tests with some confidence level. Is a confidence level the same as a confidence interval in statistical tests?
I'm trying to model a random threshold as a weight; the threshold should help the error to decrease. The weights are not random, they are 1. Is it possible to change the threshold so that the error will be 0? import numpy as np # input dataset X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]]) # output dataset y = np.array([[0, 0, 0, 1]]).T syn0 = np.zeros((2, 1)) + 1 threshold = np.random.randint(-5, 6) # forward propagation l0 = X …
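For what it's worth, with both weights fixed at 1 the target above is an AND gate, so any threshold strictly between 1 and 2 should give zero error. A small sketch (the `predict` helper is mine):

```python
import numpy as np

# same data as in the snippet above
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w = np.ones(2)  # both weights fixed at 1

def predict(X, w, threshold):
    # the unit fires when the weighted sum exceeds the threshold
    return (X @ w > threshold).astype(int)

threshold = 1.5  # any value strictly between 1 and 2 works here
error = int(np.sum(predict(X, w, threshold) != y))
```

A randomly drawn integer threshold in [-5, 5] will usually miss that interval, which is why the random draw rarely gives zero error on its own.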
I can't add new plugins or themes on a WordPress multisite network from the primary site as super administrator. It shows this error: ERR_TOO_MANY_REDIRECTS. However, I am able to access all pages, log in to each site's dashboard, and even make changes to existing code for installed plugins or themes. I am just not able to install any new ones, and the network is stuck in a loop. However, the problem can be avoided if I turn off the multisite option using …
I am using a DO Droplet (2 GB RAM, shared CPU) running Apache. I can't upload files larger than 100MB in the Media Uploader, but I can upload files smaller than 100MB. I already tried modifying php.ini and other settings like .htaccess and wp-config. @ini_set( 'upload_max_filesize' , '512M' ); @ini_set( 'post_max_size', '512M'); @ini_set( 'memory_limit', '512M' ); @ini_set( 'max_execution_time', '300' ); @ini_set( 'max_input_time', '300' ); php.ini is also modified like the above. The media uploader on the backend shows I can upload 512M, but it's just …
I need to list all categories and their respective posts. Each taxonomy term has an image created with ACF in the JetEngine plugin. I created a loop that returns the list of categories and the posts for each category, but I can't retrieve the custom field holding the taxonomy image. Can anyone give me some tips on how to do it? foreach($custom_terms as $custom_term) { wp_reset_query(); $args = array('post_type' => 'post', 'tax_query' => array( array( 'taxonomy' => 'custom_tax', 'field' => 'slug', 'terms' => …
I have a problem with an extremely large dataset (who doesn't?) which is stored in chunks such that there is low variance across chunks (i.e., the chunks are roughly representative). I wanted to play around with algorithms to do some classification in an asynchronous fashion, but I wanted to code it up myself. Sample pseudocode would look like: start a master; distribute 10 chunks to 10 slaves; while some criterion is not met: for each s in slave: …
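To make the pseudocode concrete, here is a toy simulation of the master/slave loop using a thread pool, with a per-chunk mean standing in for whatever model each slave would fit (all names and data are illustrative):

```python
import random
from concurrent.futures import ThreadPoolExecutor

random.seed(0)
# 10 hypothetical chunks; low variance across chunks, as described
chunks = [[random.gauss(5.0, 1.0) for _ in range(100)] for _ in range(10)]

def slave(chunk):
    # each slave fits its piece; a per-chunk mean stands in for a real model
    return sum(chunk) / len(chunk)

# the master farms the chunks out to 10 workers
with ThreadPoolExecutor(max_workers=10) as pool:
    partial_models = list(pool.map(slave, chunks))

# ...and combines the partial results (for true asynchrony, collect
# futures with concurrent.futures.as_completed instead of map)
combined = sum(partial_models) / len(partial_models)
```

Because the chunks are representative, the combined estimate stays close to any single chunk's estimate, which is what makes this kind of averaging scheme viable.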
My current situation is: I'm redirecting the custom category request to a URL of the kind below: http://my_url/news/category/abc I've managed to add my custom query var category and can get the query var now. The hook is as below: add_filter('rewrite_rules_array', function ($rules) { return ['^news/?$' => 'index.php?post_type=news&page=1&category=all'] + ['^news/category/(.+)/?$' => 'index.php?post_type=news&category=$matches[1]'] + $rules; }); A URL like http://my_url/news/category/abc now goes to archive-news.php and lists all posts that have the custom category abc. And a URL like http://my_url/news/lalala goes to single-news.php and displays the full content of …
I have created an API with a POST request. register_rest_route('myapi/v1', '/post_flyer', array( 'methods' => 'POST', 'callback' => 'api_post_flyer', )); I have to submit form data with an <iframe></iframe> or <script></script> tag in the post content. When I try to test this API using Postman, every time it displays an error like "No route found": "code": "rest_no_route", "message": "No route was found matching the URL and request method", "data": { "status": 404 } The rest of the functionality is there, so I want to know …
I have a custom post type with a custom taxonomy (to show some "best practice" examples on my website). On the single-post-page (single-bestpractice.php) I wanted to show all the terms (categories) like this: Parent: Child, Child, Child I tried this code: $customPostTaxonomies = get_object_taxonomies('bestpractice'); if (count($customPostTaxonomies) > 0) { foreach ($customPostTaxonomies as $tax) { $args = array( 'orderby' => 'name', 'show_count' => 0, 'pad_counts' => 0, 'hierarchical' => 1, 'taxonomy' => $tax, 'title_li' => '' ); wp_list_categories( $args ); } …
I’m taking my first steps with AI and Machine Learning, and I have the following issue. I’m trying to predict an outcome from COVID-19 day number vs. confirmed cases using the scikit-learn library. That is, my input is the number of days since the pandemic started in my country and my output is the number of confirmed cases on the corresponding date. However, with both GradientBoosting and RandomForest I get the same output values for the test values… I post below …
Is there any rule that we should use only deconvolution operations in the decoder block of an autoencoder network, or can we use convolution in such a way that it up-samples or mirrors the corresponding operation in the encoder block of the autoencoder network?
I have implemented the code from https://towardsdatascience.com/image-feature-extraction-using-pytorch-e3b327c3607a?gi=7b5fd7b03ed1 for image feature extraction. But it is confusing that both a 224*224 input image and a 448*448 input image work fine. As I understand it, pretrained VGG16 (without changing its trained weights) only takes 224*224 input images. I suppose the 1st layer (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) can take larger images, but the pretrained weights cannot extend to larger input dimensions. Am I right?
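For reference, my understanding of the size arithmetic: the 3×3/pad-1 convolutions preserve spatial size and each of VGG16's five max-pool layers halves it, so the convolutional weights themselves are size-agnostic; only the fully connected classifier head pins the input to 224×224 (unless the code drops it or uses adaptive pooling). A quick sketch of that arithmetic (the function name is mine):

```python
def vgg_feature_map_size(input_size, n_pools=5):
    """Spatial size after VGG16's conv stack: the 3x3/pad-1 convolutions
    preserve size, and each of the 5 max-pool layers halves it."""
    size = input_size
    for _ in range(n_pools):
        size //= 2
    return size

# 224 -> 7x7 feature maps (what the pretrained FC head was trained on);
# 448 -> 14x14, so a flattened vector would no longer match the FC weights.
size_224 = vgg_feature_map_size(224)
size_448 = vgg_feature_map_size(448)
```

So if the tutorial extracts features before the classifier (or uses torchvision's adaptive average pool), both input sizes run without error, which would explain what I'm seeing.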
I've got a WordPress install where I'm trying to change this content_width code: if( ! isset( $content_width ) ) $content_width = 290; in my functions.php file based on the user's screen size. I've tried using CSS media queries, but for our particular use-case, I need to be able to change this in the functions.php file based on the user's screen size. Ideally 1080 for desktop, 720 for tablets, and 290 for mobile. Any ideas?
I am redesigning some of the classical algorithms for the Hadoop/MapReduce framework. I was wondering if there is any established approach for writing Big-O-style expressions to measure time complexity? For example, hypothetically, a simple average calculation of n (= 1 billion) numbers is an O(n) + C operation using a simple for loop, or O(log). I am assuming division to be a constant-time operation for the sake of simplicity. If I break this massively parallelizable algorithm up for MapReduce, by dividing data over …
The post.php screen lists posts [and other post types] vertically in the admin area. Each item's post_title is a hyperlink, directly above some inline choices. Both the title link and the "Edit" inline selection redundantly go to the same destination, namely: http[s]://{yoursite.tld}/wp-admin/post.php?post={postID}&action=edit I want to filter the post_title URL href using PHP. Specifically, I want to set a custom destination for the URL, based on metadata stored in the post, e.g. post ID 123 has post meta data {'custom_URL' : 'http:somesite.com'}. In …
exp = explainer.explain_instance(df_val_final.Description[idx], predproba_list, num_features=5, top_labels=2) While executing explain_instance of LimeTextExplainer, the above statement keeps executing continuously with the warning message below. Execution stops only if I interrupt the kernel: C:\ProgramData\Anaconda3\lib\site-packages\fastai\torch_core.py:83: UserWarning: Tensor is int32: upgrading to int64; for better performance use int64 input warn('Tensor is int32: upgrading to int64; for better performance use int64 input') C:\ProgramData\Anaconda3\lib\site-packages\fastai\torch_core.py:83: UserWarning: Tensor is int32: upgrading to int64; for better performance use int64 input warn('Tensor is int32: upgrading to int64; for better performance …
After reading several posts here (including [1] and [2]) and testing with WordPress 4.5.3 using Twenty Sixteen as the parent theme, I think the following code (in functions.php) must be correct: function childtheme_enqueue_styles() { $parent_style = 'twentysixteen-style'; wp_enqueue_style( $parent_style, get_template_directory_uri() . '/style.css' ); wp_enqueue_style( 'childtheme-style', get_stylesheet_directory_uri() . '/style.css', array( $parent_style ), wp_get_theme()->get('Version') ); } add_action( 'wp_enqueue_scripts', 'childtheme_enqueue_styles' ); It loads the parent stylesheet (with the parent theme version number) and then the child theme stylesheet (with the child theme version number). This seems …
I know this question has been asked many times, and I've been trying every answer I came across all day from 11am-9pm and I am STUCK. Here is my search page http://tribute-software.com/development/?s=1991 As you can see the search right now is for 1991. I have some data printed on the page so it can help us debug. It is showing the correct 10 posts in the printed data, and it calculates the correct number of pages (2) -- (I set …
For a university project, I need to send text in Spanish via SMS. As these have a cost, I am trying to compress this text in an admittedly inefficient way. This consists of first generating a permutation of codes formed by two characters drawn from many alphabets (fines, Cyrillic, etc.), to each of which I assign a word that has more than two characters (so that it is actually being compressed). Then I take each word in a sentence and assign it its associated …
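A minimal sketch of the scheme I described, with a made-up symbol set standing in for the mixed alphabets and a tiny vocabulary:

```python
from itertools import product

# A made-up two-character code alphabet; the real scheme would draw
# codes from several alphabets (Cyrillic etc.) for a larger code space.
symbols = "абвгд"
codes = ("".join(pair) for pair in product(symbols, repeat=2))

# only words longer than two characters get a code, so encoding shortens them
vocab = ["hola", "mundo", "desde", "aqui"]
encode_map = {word: next(codes) for word in vocab}
decode_map = {code: word for word, code in encode_map.items()}

def compress(sentence):
    return " ".join(encode_map.get(w, w) for w in sentence.split())

def decompress(text):
    return " ".join(decode_map.get(t, t) for t in text.split())

sentence = "hola mundo desde aqui"
compressed = compress(sentence)
```

One caveat I'm aware of: SMS defaults to the GSM-7 alphabet (160 characters per message), and a single Cyrillic character forces the whole message into UCS-2, dropping the limit to 70 characters, so the character savings can be cancelled out by the encoding switch.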
I have a dataset-specific problem where I need to use a splitting function other than the Gini index. This requires me to rewrite a decision tree from scratch. I have a working model, but it is highly inefficient. To make a split I currently iterate through each feature and then through each unique data point of that feature, for each node (a total of nodes × features × unique levels Gini evaluations). Because of this, my DT on a 300k × 145 dataset has …
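One way I'm thinking of speeding this up: sort each feature once and score every candidate split with cumulative sums, instead of re-partitioning the data at each unique value. A sketch for binary labels, using Gini for concreteness; the same trick should carry over to a custom criterion as long as it can be computed from left/right class counts:

```python
import numpy as np

def best_split(x, y):
    """Score every split of one feature in O(n log n): sort once, then get
    left/right class counts for all thresholds from cumulative sums."""
    order = np.argsort(x)
    x_sorted, y_sorted = x[order], y[order]
    n = len(y_sorted)
    n_left = np.arange(1, n)                  # sizes of the left partition
    n_right = n - n_left
    pos_left = np.cumsum(y_sorted)[:-1]       # positives left of each split
    pos_right = y_sorted.sum() - pos_left
    p_l, p_r = pos_left / n_left, pos_right / n_right
    # weighted binary Gini; swap in any criterion built from these counts
    score = (n_left * 2 * p_l * (1 - p_l) + n_right * 2 * p_r * (1 - p_r)) / n
    score[x_sorted[1:] == x_sorted[:-1]] = np.inf  # skip ties in x
    best = int(np.argmin(score))
    return (x_sorted[best] + x_sorted[best + 1]) / 2, float(score[best])
```

This replaces the per-value rescan with one sort plus a vectorized pass, so each node costs O(features × n log n) rather than O(features × unique levels × n).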
I am preparing a plugin for REST API integration (the WP 4.7 core version) by providing custom endpoints for data from the plugin. One of the things I am trying to do is to set a _links property in the response and make the linked resource embeddable. Heavily simplified code excerpt: class Clgs_REST_Logs extends WP_REST_Controller { const NSPACE = 'clgs'; const BASE = 'logs'; public function register_routes() { register_rest_route( self::NSPACE, '/' . self::BASE, array( array( 'methods' => WP_REST_Server::READABLE, 'callback' => …
When requesting the test API, I get the following response: { "errors": [ { "error_code": "", "error_message": "Sorry, you cannot list resources." } ] } The credentials are OK, and the REST API keys have both read/write permissions. I tried with newly generated API keys; same problem.
I'm trying to build an AI to play tic-tac-toe (CS50 AI pset0). I have built 7 essential functions for this purpose. 1. A player function that takes a board as an argument and returns whose turn it is. An actions function that takes a board as an argument and returns the possible actions on the board as a set of tuples. A result function which takes an action and a board as arguments and returns the new board that results from that action. A winner function which takes …
I want to make a filter with more than one taxonomy using AJAX and jQuery. The demo I want to code is: http://gycweb.org/resources/ I have tried to build it from this demo: http://dinhkk.com/demo/ajaxfilter/ The problem is that I can't send information from the two menus in the right sidebar to the AJAX data at the same time. Can anyone help me with a solution?
I know that you're supposed to scale your test data using the parameters (mean and stdev) from your training data. This is relatively simple; but what if the number of samples is limited in one training data set (e.g. Set A = 5 samples) so I want to combine two data sets (i.e. Set A + Set B = 10 samples) to have enough samples for training, what can I do so that I can scale/normalize the two sets into …
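The straightforward option I'm considering: pool A and B first, then compute a single mean/std on the combined training data, so both sets (and later the test data) end up on one scale. A minimal sketch with made-up numbers:

```python
import numpy as np

# Made-up stand-ins for Set A (5 samples) and Set B (5 samples)
set_a = np.array([[1.0, 10.0], [2.0, 12.0], [3.0, 11.0], [2.5, 13.0], [1.5, 9.0]])
set_b = np.array([[2.0, 30.0], [3.0, 28.0], [4.0, 33.0], [3.5, 31.0], [2.5, 29.0]])

# Pool first, then fit one mean/std on the combined training data,
# so every training sample (and later the test set) shares one scale.
train = np.vstack([set_a, set_b])
mean, std = train.mean(axis=0), train.std(axis=0)
train_scaled = (train - mean) / std
```

The caveat is that if A and B were collected under different conditions, pooling bakes that shift into the statistics; in that case per-set standardization or some batch-effect correction may be more appropriate.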
I have bag-of-words data for a set of documents. The data has 3 columns: {document number, word number, count of the word in the document}. I am supposed to generate frequent itemsets of a particular size. I thought that I would make a list of all words that appear in each document, create a table of these lists, and then generate frequent itemsets using Mlxtend or Orange. However, this approach does not seem to be efficient.
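For comparison, a plain-Python version of the counting I described: group the {document, word} pairs into one transaction per document, then count all size-k combinations directly (the data values below are made up):

```python
from collections import Counter, defaultdict
from itertools import combinations

# (document number, word number, count) triples, shaped like the data
# described above; the actual values here are illustrative.
rows = [(1, 10, 2), (1, 11, 1), (1, 12, 3),
        (2, 10, 1), (2, 12, 1),
        (3, 10, 4), (3, 11, 2)]

# group words by document to form one "transaction" per document
docs = defaultdict(set)
for doc, word, _count in rows:
    docs[doc].add(word)

# count all itemsets of a given size across documents
k = 2
counts = Counter()
for words in docs.values():
    counts.update(combinations(sorted(words), k))

min_support = 2  # keep itemsets appearing in at least 2 documents
frequent = {s: c for s, c in counts.items() if c >= min_support}
```

Enumerating all size-k combinations blows up for large documents, which is exactly the pruning that Apriori/FP-growth (what Mlxtend implements) are designed to avoid, so for a fixed small k on short documents this direct count can still be competitive.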
I have a custom post type, and have created an archive template, archive-custom_post_type.php, which includes a search form. I am then using pre_get_posts to add parameters to the query to filter my results. However, to make sure this only happens on this archive page, I want to check a few things. First I am checking if the post type matches. But then I wanted to check the is_search() parameter, only to see that it is false. How and when is …
I am currently working on an apparel recommendation system, where I have tabulated data containing a list of products with their respective metadata (brand, category, color etc.) I have an additional column of client ids to denote which client has bought which product. I want this content-based recommendation system to recommend a client a bunch of products, based on the metadata of the products they have purchased in the past. I am trying to find a way to learn user …
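The simplest baseline for what I'm after seems to be: build a client profile as the average of the metadata vectors of purchased items, then rank the remaining items by cosine similarity. A sketch with hypothetical one-hot metadata (all item names and features are made up):

```python
import numpy as np

# Hypothetical one-hot metadata: [shirt, pants, red, blue, green]
items = {
    "shirt_red":   np.array([1, 0, 1, 0, 0], dtype=float),
    "shirt_blue":  np.array([1, 0, 0, 1, 0], dtype=float),
    "shirt_green": np.array([1, 0, 0, 0, 1], dtype=float),
    "pants_red":   np.array([0, 1, 1, 0, 0], dtype=float),
}

purchased = ["shirt_red", "shirt_blue"]  # one client's history

# client profile = average metadata vector of past purchases
profile = np.mean([items[p] for p in purchased], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# rank the items the client hasn't bought yet
scores = {name: cosine(profile, vec)
          for name, vec in items.items() if name not in purchased}
best = max(scores, key=scores.get)
```

Learning per-attribute importances could then be framed as fitting feature weights that rank purchased items above non-purchased ones, but the unweighted profile above is the usual starting point.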
I'm trying to fix up my code to meet the WordPress VIP Coding Standards. I'm getting a couple of issues that I'd like to see go away, but I'm not sure what the best strategy is. The first issue is when I'm verifying a nonce while saving metabox data: $nonce = isset( $_POST['revv_meta_box_nonce'] ) ? $_POST['revv_meta_box_nonce'] : ''; The error I'm getting here is 'Processing data without nonce verification', which is pretty silly since I'm just storing the nonce in …
I am working on a Wordpress installation where we recently decoupled the frontend into a NextJS application that is no longer hosted on the same domain as the admin. We are accessing data etc through the API. That's been fine for non-logged in users viewing posts etc. But recently we realized that the "preview post" functionality has been broken, because users who are logged in on the admin side are no longer logged in on the frontend. So they can't …
I have a recipe site where every post naturally contains ingredients. Every ingredient is a tag, and I would like to automatically link every ingredient so the user can click on it and see all recipes that use that ingredient. For this to be possible, I guess I have to loop through every word in the post, check whether that word matches an existing tag, and wrap it in a hyperlink. But I'm not sure …
I've implemented a SegNet and SegNet ReLU variant in PyTorch. I'm using it as a proof-of-concept for now, but what really bothers me is the noise produced by the network. With ADAM I seem to get slightly less noise, whereas with SGD the noise increases. I can see the loss going down and the cross-evaluation accuracy rising to 98%-99% and yet the noise is still there. On the left is the actual image, then you can see the mask, and …